71 research outputs found

    Voice onset time variation in stop consonant to vowel transitions

    Get PDF
    Reduced duration, increased consistency, and improved intelligibility are goals of reducing the motor complexity of speech for individuals with cerebral palsy (CP) who have dysarthria. In this study, measurements and analyses were made to compare an individual with spastic CP and dysarthria to an individual with athetoid CP and dysarthria, as well as to a non-dysarthric individual. Each participant's normal speech, whispering, and speech using an artificial larynx were evaluated using source-filter theory methodology. The plausibility of reducing dysarthric speech duration by minimizing vocalization is tested through stop consonant /p/ to vowel transitions. The data suggest that speech duration depends on voice onset time (VOT) variation among the participants. This study could serve as a basis to encourage further research analyzing neuromotor and physiological articulatory control, which could lead to interventional treatment for individuals with dysarthria.
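    The core measurement behind this abstract is voice onset time: the interval from a stop's burst release to the onset of voicing. A minimal sketch of that computation follows, assuming burst-release and voicing-onset times have already been annotated (e.g., in a Praat TextGrid); the token times and function names are illustrative, not the study's data.

```python
# Minimal VOT computation from annotated event times (hypothetical values).
from statistics import mean, stdev

def voice_onset_time(burst_release_s: float, voicing_onset_s: float) -> float:
    """VOT = voicing onset minus burst release.

    Positive values: voicing lags the release (e.g., aspirated /p/).
    Negative values: prevoicing begins before the release.
    """
    return voicing_onset_s - burst_release_s

# Hypothetical annotations for repeated /pa/ tokens, in seconds:
tokens = [(0.12, 0.18), (0.95, 1.02), (1.80, 1.85)]
vots = [voice_onset_time(burst, voicing) for burst, voicing in tokens]

print(f"mean VOT: {mean(vots) * 1000:.1f} ms")
print(f"VOT variability (SD): {stdev(vots) * 1000:.1f} ms")  # a consistency proxy
```

    Lower VOT variability across tokens is one way to quantify the "increased consistency" goal the abstract names.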

    DEVELOPMENT OF AN ACCURATE SEIZURE DETECTION SYSTEM USING RANDOM FOREST CLASSIFIER WITH ICA BASED ARTIFACT REMOVAL ON EEG DATA

    Get PDF
    This paper presents the development of a reliable artifact removal and precise epileptic seizure identification system using Siena Scalp EEG data and machine learning techniques. A Random Forest classifier is used for seizure classification, and independent component analysis (ICA) is used for artifact removal. Various artifacts, such as eye blinks, muscular activity, and environmental noise, are successfully recognized and removed from the EEG signals using ICA-based artifact removal, increasing the accuracy of the subsequent analysis. A precise distinction between seizure and non-seizure segments is made possible by the Random Forest classifier, which was designed expressly to capture the spatial and temporal patterns associated with epileptic seizures. Experimental evaluation on the Siena Scalp EEG data demonstrates the high accuracy of our approach, achieving a 96% seizure identification rate. The merging of modern signal processing methods and machine learning algorithms is a promising strategy for improving the accuracy and clinical utility of EEG-based epilepsy diagnosis.
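    The two-stage pipeline this abstract describes (ICA cleanup, then Random Forest classification) can be sketched with MNE-Python and scikit-learn. This is a rough illustration under stated assumptions, not the paper's pipeline: the file path, epoch length, excluded component indices, and labels are all placeholders.

```python
# Sketch: ICA-based artifact removal followed by Random Forest classification.
import mne
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical recording; real work would load Siena Scalp EEG records.
raw = mne.io.read_raw_edf("siena_scalp_eeg_record.edf", preload=True)
raw.filter(1.0, 40.0)  # band-pass; the 1 Hz high-pass also stabilizes ICA

# Fit ICA, mark artifact components (found by inspection), and remove them.
ica = mne.preprocessing.ICA(n_components=20, random_state=97)
ica.fit(raw)
ica.exclude = [0, 3]  # e.g., an eye-blink and a muscle component
clean = ica.apply(raw.copy())

# Cut the cleaned signal into fixed-length epochs and flatten for the classifier.
epochs = mne.make_fixed_length_epochs(clean, duration=2.0, preload=True)
X = epochs.get_data().reshape(len(epochs), -1)  # (n_epochs, channels * samples)
y = np.random.randint(0, 2, len(epochs))  # placeholder seizure/non-seizure labels

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y)
clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

    In practice the labels would come from the dataset's clinical seizure annotations, and features would typically be spectral or statistical summaries per epoch rather than raw flattened samples.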

    Embodied Question Answering

    Full text link
    We present a new AI task -- Embodied Question Answering (EmbodiedQA) -- where an agent is spawned at a random location in a 3D environment and asked a question ("What color is the car?"). In order to answer, the agent must first intelligently navigate to explore the environment, gather information through first-person (egocentric) vision, and then answer the question ("orange"). This challenging task requires a range of AI skills -- active perception, language understanding, goal-driven navigation, commonsense reasoning, and grounding of language into actions. In this work, we develop the environments, end-to-end-trained reinforcement learning agents, and evaluation protocols for EmbodiedQA. Comment: 20 pages, 13 figures. Webpage: https://embodiedqa.org
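    The task structure the abstract describes is a perceive-navigate-answer loop. The toy environment and agent below are hypothetical stand-ins, not the authors' API; they only illustrate the cycle of acting on egocentric observations until the agent chooses to answer.

```python
# Schematic EmbodiedQA episode with stub environment and agent.
class StubEnv:
    """Toy 1-D 'house': walking forward eventually brings the car into view."""
    def reset(self):
        self.pos = 0
        return self.observe()
    def step(self, action):
        if action == "forward":
            self.pos += 1
        return self.observe()
    def observe(self):
        return {"sees_car": self.pos >= 3}  # stand-in for an egocentric frame

class StubAgent:
    """Navigates until the target is visible, then commits to an answer."""
    def act(self, frame, question):
        return "ANSWER" if frame["sees_car"] else "forward"
    def answer(self, frame, question):
        return "orange"  # a trained agent would ground this in the observation

def run_episode(env, agent, question, max_steps=100):
    frame = env.reset()                      # agent spawned somewhere in the scene
    for _ in range(max_steps):
        action = agent.act(frame, question)  # vision + language -> action
        if action == "ANSWER":
            break
        frame = env.step(action)             # navigate to gather information
    return agent.answer(frame, question)

print(run_episode(StubEnv(), StubAgent(), "What color is the car?"))  # -> orange
```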

    Evaluating Visual Conversational Agents via Cooperative Human-AI Games

    Full text link
    As AI continues to advance, human-AI teams are inevitable. However, progress in AI is routinely measured in isolation, without a human in the loop. It is crucial to benchmark progress in AI, not just in isolation, but also in terms of how it translates to helping humans perform certain tasks, i.e., the performance of human-AI teams. In this work, we design a cooperative game - GuessWhich - to measure human-AI team performance in the specific context of the AI being a visual conversational agent. GuessWhich involves live interaction between the human and the AI. The AI, which we call ALICE, is provided an image which is unseen by the human. Following a brief description of the image, the human questions ALICE about this secret image to identify it from a fixed pool of images. We measure performance of the human-ALICE team by the number of guesses it takes the human to correctly identify the secret image after a fixed number of dialog rounds with ALICE. We compare performance of the human-ALICE teams for two versions of ALICE. Our human studies suggest a counterintuitive trend - that while AI literature shows that one version outperforms the other when paired with an AI questioner bot, we find that this improvement in AI-AI performance does not translate to improved human-AI performance. This suggests a mismatch between benchmarking of AI in isolation and in the context of human-AI teams. Comment: HCOMP 2017
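    The team-performance metric the abstract defines is simply the number of guesses needed to find the secret image in the pool once dialog ends. A minimal sketch of that scoring follows; the random ranking is a placeholder for the human's post-dialog belief over images, not either ALICE variant.

```python
# Sketch of the GuessWhich guess-count metric over a fixed candidate pool.
import random

def guesses_to_find(secret_id, pool_ids, rank_fn):
    """Guesses needed when candidates are tried in ranked order (1 = first try)."""
    ranked = rank_fn(pool_ids)
    return ranked.index(secret_id) + 1

def random_ranking(pool_ids):
    ids = list(pool_ids)
    random.shuffle(ids)  # placeholder for a belief informed by the dialog
    return ids

pool = list(range(50))  # fixed pool of candidate images
secret = random.choice(pool)
trials = [guesses_to_find(secret, pool, random_ranking) for _ in range(1000)]
print(f"mean guesses: {sum(trials) / len(trials):.1f}")  # ~25.5 at chance on 50
```

    A better dialog agent should push the mean guess count well below the chance baseline; the paper's point is that gains measured with an AI questioner need not carry over when a human asks the questions.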